File: cmu-user.info Node: Global Function Type Inference, Prev: Local Function Type Inference, Up: Type Inference, Next: Operation Specific Type Inference
Global Function Type Inference
------------------------------
As described in section *Note Function Types::, a global function type
(ftype) declaration places implicit type assertions on the call
arguments, and also guarantees the type of the return value. So
wherever a call to a declared function appears, there is no doubt as to
the types of the arguments and return value. Furthermore, Python will
infer a function type from the function's definition if there is no
`ftype' declaration. Any type declarations on the argument variables
are used as the argument types in the derived function type, and the
compiler's best guess for the result type of the function is used as the
result type in the derived function type.
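For example (using a hypothetical `vowel-count' function purely for
illustration), a global declaration tells the compiler the argument and
result types at every call site:

(declaim (ftype (function (simple-string) fixnum) vowel-count))

;; At any call, the compiler may now assume that the argument is a
;; SIMPLE-STRING and that the result is a FIXNUM.
(+ (vowel-count s) 1)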
This method of deriving function types from the definition implicitly assumes
that functions won't be redefined at run-time. Consider this example:
(defun foo-p (x)
  (let ((res (and (consp x) (eq (car x) 'foo))))
    (format t "It is ~:[not ~;~]foo." res)))

(defun frob (it)
  (if (foo-p it)
      (setf (cadr it) 'yow!)
      (1+ it)))
Presumably, the programmer really meant to return `res' from `foo-p',
but he seems to have forgotten. When he tries `(frob (list
'foo nil))', `frob' will flame out when it tries to add to a `cons'.
Realizing his error, he fixes `foo-p' and recompiles it. But when he
retries his test case, he is baffled because the error is still there.
What happened in this example is that Python proved that the result of
`foo-p' is `null', and then proceeded to optimize away the `setf' in
`frob'.
Fortunately, in this example, the error is detected at compile time due
to notes about unreachable code. Still, some users may not want to
worry about this sort of problem during incremental development, so
there is a variable to control deriving function types.
-- Variable: *derive-function-types*
If true (the default), argument and result type information derived
from compilation of `defun's is used when compiling calls to that
function. If false, only information from `ftype' proclamations
will be used.
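During incremental development, function type derivation can simply be
turned off, so that redefining a function cannot leave stale derived
type information in previously compiled callers (assuming the variable
is accessible from the current package):

(setf *derive-function-types* nil) ; trust only FTYPE proclamations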
File: cmu-user.info Node: Operation Specific Type Inference, Prev: Global Function Type Inference, Up: Type Inference, Next: Dynamic Type Inference
Operation Specific Type Inference
---------------------------------
Many of the standard CMU Common Lisp functions have special type inference procedures
that determine the result type as a function of the argument types. For
example, the result type of `aref' is the array element type. Here are some
other examples of type inferences:
(logand x #xFF) => (unsigned-byte 8)
(+ (the (integer 0 12) x) (the (integer 0 1) y)) => (integer 0 13)
(ash (the (unsigned-byte 16) x) -8) => (unsigned-byte 8)
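The `aref' case mentioned above follows the same pattern: when the
array's element type is known, so is the result type.

(aref (the (simple-array (unsigned-byte 8) (*)) v) i)
  => (unsigned-byte 8)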
File: cmu-user.info Node: Dynamic Type Inference, Prev: Operation Specific Type Inference, Up: Type Inference, Next: Type Check Optimization
Dynamic Type Inference
----------------------
Python uses flow analysis to infer types in dynamically typed programs. For
example:
(ecase x
  (list (length x))
  ...)
Here, the compiler knows the argument to `length' is a list, because the
call to `length' is only done when `x' is a list. The most significant
efficiency effect of inference from assertions is usually in type check
optimization.
Dynamic type inference has two inputs: explicit conditionals and
implicit or explicit type assertions. Flow analysis propagates these
constraints on variable type to any code that can be executed only after
passing through the constraint. Explicit type constraints come from `if's
where the test is either a lexical variable or a function of lexical
variables and constants, where the function is either a type predicate,
a numeric comparison or `eq'.
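For example, a type predicate in an `if' test constrains the variable
differently in each branch:

(if (consp x)
    (car x) ; X is known to be a CONS here, so CAR needs no check
    ...)    ; X is known to be an ATOM here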
If there is an `eq' (or `eql') test, then the compiler will actually
substitute one argument for the other in the true branch. For example:
(when (eq x :yow!) (return x))
becomes:
(when (eq x :yow!) (return :yow!))
This substitution is done when one argument is a constant, or one argument has
better type information than the other. This transformation reveals
opportunities for constant folding or type-specific optimizations. If the test
is against a constant, then the compiler can prove that the variable is not
that constant value in the false branch, or `(not (member :yow!))'
in the example above. This can eliminate redundant tests, for example:
(if (eq x nil)
    ...
    (if x a b))
is transformed to this:
(if (eq x nil)
    ...
    a)
Variables appearing as `if' tests are interpreted as `(not (eq VAR
nil))' tests. The compiler also converts `=' into `eql' where possible.
It is difficult to do inference directly on `=' since it does implicit
coercions.
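For example, when the argument is declared to be an integer, no coercion
is possible, so the comparison can be treated as `eql' (a sketch, with a
hypothetical `test-3' function):

(defun test-3 (x)
  (declare (integer x))
  (when (= x 3) ; the = test is handled as though it were (EQL X 3)
    ...))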
When there is an explicit `<' or `>' test on integer variables, the
compiler makes inferences about the ranges the variables can assume in
the true and false branches. This is mainly useful when it proves that
the values are small enough in magnitude to allow open-coding of
arithmetic operations. For example, in many uses of `dotimes' with a
`fixnum' repeat count, the compiler proves that fixnum arithmetic can be
used.
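For example (a sketch, with a hypothetical `sum-below' function),
declaring the repeat count to be a `fixnum' lets the compiler prove that
the index variable is always a fixnum:

(defun sum-below (n)
  (declare (fixnum n))
  (let ((sum 0))
    (dotimes (i n sum)
      ;; I is provably a fixnum with 0 <= I < N, so the index
      ;; comparison and increment can be open-coded.
      (incf sum i))))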
Implicit type assertions are quite common, especially if you declare function
argument types. Dynamic inference from implicit type assertions sometimes
helps to disambiguate programs to a useful degree, but is most noticeable when
it detects a dynamic type error. For example:
(defun foo (x)
  (+ (car x) x))
results in this warning:
In: DEFUN FOO
  (+ (CAR X) X)
==>
  X
Warning: Result is a LIST, not a NUMBER.
Note that Common Lisp's dynamic type checking semantics make dynamic type
inference useful even in programs that aren't really dynamically typed, for
example:
(+ (car x) (length x))
Here, `x' presumably always holds a list, but in the absence of a
declaration the compiler cannot assume `x' is a list simply because
list-specific operations are sometimes done on it. The compiler must
consider the program to be dynamically typed until it proves otherwise.
Dynamic type inference proves that the argument to `length' is always a
list because the call to `length' is only done after the list-specific
`car' operation.
File: cmu-user.info Node: Type Check Optimization, Prev: Dynamic Type Inference, Up: Type Inference
Type Check Optimization
-----------------------
Python backs up its support for precise type checking by minimizing the
cost of run-time type checking. This is done both through type
inference and through optimizations of type checking itself.
Type inference often allows the compiler to prove that a value is of the
correct type, and thus no type check is necessary. For example:
(defstruct foo a b c)

(defstruct link
  (foo (required-argument) :type foo)
  (next nil :type (or link null)))
(foo-a (link-foo x))
Here, there is no need to check that the result of `link-foo' is a `foo',
since it always is. Even when some type checks are necessary, type inference
can often reduce the number:
(defun test (x)
  (let ((a (foo-a x))
        (b (foo-b x))
        (c (foo-c x)))
    ...))
In this example, only one `(foo-p x)' check is needed. This applies to a
lesser degree in list operations, such as:
(if (eql (car x) 3) (cdr x) y)
Here, we only have to check that `x' is a list once.
Since Python recognizes explicit type tests, code that explicitly protects
itself against type errors has little introduced overhead due to implicit type
checking. For example, this loop compiles with no implicit checks for
`car' and `cdr':
(defun memq (e l)
  (do ((current l (cdr current)))
      ((atom current) nil)
    (when (eq (car current) e) (return current))))
Python reduces the cost of checks that must be done through an optimization
called COMPLEMENTING. A complemented check for TYPE is simply a check
that the value is not of the type `(not TYPE)'. This is only
interesting when something is known about the actual type, in which case we can
test for the complement of `(and KNOWN-TYPE (not TYPE))', or the
difference between the known type and the assertion. An example:
(link-foo (link-next x))
Here, we change the type check for `link-foo' from a test for `foo' to a
test for:
(not (and (or foo null) (not foo)))
or more simply `(not null)'. This is probably the most important use of
complementing, since the situation is fairly common, and a `null' test
is much cheaper than a structure type test.
Here is a more complicated example that illustrates the combination of
complementing with dynamic type inference:
(defun find-a (a x)
  (declare (type (or link null) x))
  (do ((current x (link-next current)))
      ((null current) nil)
    (let ((foo (link-foo current)))
      (when (eq (foo-a foo) a) (return foo)))))
This loop can be compiled with no type checks. The `link' test for
`link-foo' and `link-next' is complemented to `(not null)', and then
deleted because of the explicit `null' test. As before, no check is
necessary for `foo-a', since the `link-foo' is always a `foo'. This
sort of situation shows how precise type checking combined with precise
declarations can actually result in reduced type checking.
File: cmu-user.info Node: Source Optimization, Prev: Type Inference, Up: Advanced Compiler Use and Efficiency Hints, Next: Tail Recursion
Source Optimization
===================
This section describes source-level transformations that Python does on
programs in an attempt to make them more efficient. Although source-level
optimizations can make existing programs more efficient, the biggest advantage
of this sort of optimization is that it makes it easier to write efficient
programs. If a clean, straightforward implementation can be transformed
into an efficient one, then there is no need for tricky and dangerous hand
optimization.